
    Maximum-a-posteriori estimation with Bayesian confidence regions

    Solutions to inverse problems that are ill-conditioned or ill-posed may have significant intrinsic uncertainty. Unfortunately, analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems. As a result, while most modern mathematical imaging methods produce impressive point estimation results, they are generally unable to quantify the uncertainty in the solutions delivered. This paper presents a new general methodology for approximating Bayesian high-posterior-density credibility regions in inverse problems that are convex and potentially very high-dimensional. The approximations are derived by using recent concentration-of-measure results, related to information theory, for log-concave random vectors. A remarkable property of the approximations is that they can be computed very efficiently, even in large-scale problems, by using standard convex optimisation techniques. In particular, they are available as a by-product in problems solved by maximum-a-posteriori estimation. The approximations also have favourable theoretical properties: namely, they outer-bound the true high-posterior-density credibility regions and they are stable with respect to model dimension. The proposed methodology is illustrated on two high-dimensional imaging inverse problems related to tomographic reconstruction and sparse deconvolution, where the approximations are used to perform Bayesian hypothesis tests and explore the uncertainty about the solutions, and where proximal Markov chain Monte Carlo algorithms are used as a benchmark to compute exact credible regions and measure the approximation error.
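
    A minimal sketch of how such a region can be used, assuming a posterior of the form p(x|y) proportional to exp(-phi(x)) with phi convex, and taking the approximate HPD threshold to be phi(x_MAP) + sqrt(16 N log(3/alpha)) + N, matching the paper's concentration bound (the exact constants should be checked against the paper); the function and variable names are placeholders:

        import numpy as np

        def hpd_threshold(phi_at_map, N, alpha):
            """Approximate level gamma defining the (1 - alpha) HPD credibility
            region {x : phi(x) <= gamma}, available as a by-product of MAP
            estimation. Threshold form assumed from the concentration bound."""
            return phi_at_map + np.sqrt(16.0 * N * np.log(3.0 / alpha)) + N

        def reject_surrogate(phi, x_surrogate, gamma):
            """Bayesian hypothesis test: a surrogate image (e.g. with a structure
            of interest removed) is rejected at level alpha if it lies outside
            the approximate HPD region."""
            return phi(x_surrogate) > gamma

    For example, a structure appearing in the MAP estimate can be declared statistically significant if the image with that structure inpainted away violates the threshold.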

    Revisiting maximum-a-posteriori estimation in log-concave models

    Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimisation. Despite its success and wide adoption, MAP estimation is not theoretically well understood yet. The prevalent view in the community is that MAP estimation is not proper Bayesian estimation in a decision-theoretic sense because it does not minimise a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator, which minimises the mean squared loss). This paper addresses this theoretical gap by presenting a decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function for performing Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss is the Bregman divergence associated with the negative log posterior density. We then show that the MAP estimator is the only Bayesian estimator that minimises the expected canonical loss, and that the posterior mean or MMSE estimator minimises the dual canonical loss. We also study MAP and MMSE estimation performance in high dimensions and establish a universal bound on the expected canonical error as a function of dimension, offering new insights into the good performance observed in convex problems. These results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple roles that convex geometry plays in imaging problems. (Accepted for publication in SIAM Journal on Imaging Sciences.)
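
    In LaTeX, the canonical loss in question is the Bregman divergence generated by the negative log posterior (the argument-order convention below is an assumption, chosen so that the pairing matches the abstract):

        \[
          D_\phi(u, x) = \phi(u) - \phi(x) - \langle \nabla\phi(x),\, u - x \rangle,
          \qquad \phi(x) = -\log p(x \mid y).
        \]

    With this convention, the MAP estimator minimises the expected canonical loss E[D_phi(u, x) | y] over u, while the posterior mean (MMSE estimator) minimises the dual loss E[D_phi(x, u) | y].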

    Proximal Markov chain Monte Carlo algorithms

    This paper presents a new Metropolis-adjusted Langevin algorithm (MALA) that uses convex analysis to simulate efficiently from high-dimensional densities that are log-concave, a class of probability distributions that is widely used in modern high-dimensional statistics and data analysis. The method is based on a new first-order approximation for Langevin diffusions that exploits log-concavity to construct Markov chains with favourable convergence properties. This approximation is closely related to Moreau-Yosida regularisations for convex functions and uses proximity mappings instead of gradient mappings to approximate the continuous-time process. The proposed method complements existing MALA methods in two ways. First, the method is shown to have very robust stability properties and to converge geometrically for many target densities for which other MALA algorithms are not geometrically ergodic, or are so only if the step size is sufficiently small. Second, the method can be applied to high-dimensional target densities that are not continuously differentiable, a class of distributions that is increasingly used in image processing and machine learning and that is beyond the scope of existing MALA and HMC algorithms. To use this method it is necessary to compute, or to approximate efficiently, the proximity mappings of the logarithm of the target density. For several popular models, including many Bayesian models used in modern signal and image processing and machine learning, this can be achieved with convex optimisation algorithms and with approximations based on proximal splitting techniques, which can be implemented in parallel. The proposed method is demonstrated on two challenging high-dimensional and non-differentiable models related to image resolution enhancement and low-rank matrix estimation that are not well addressed by existing MCMC methodology.
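
    A minimal sketch of one transition of such a proximal MALA, for the illustrative target pi(x) proportional to exp(-lam * ||x||_1), whose proximity mapping is soft-thresholding; this toy target and the variable names are assumptions, not the paper's experiments:

        import numpy as np

        def soft_threshold(x, t):
            # Proximity mapping of t * ||.||_1 (soft-thresholding).
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def pmala_step(x, lam, delta, rng):
            """One proximal MALA transition for pi(x) prop exp(-lam * ||x||_1).
            The proposal mean uses the proximity mapping of (delta/2) * g
            instead of a gradient step."""
            g = lambda z: lam * np.sum(np.abs(z))
            prox = lambda z: soft_threshold(z, delta * lam / 2.0)
            y = prox(x) + np.sqrt(delta) * rng.standard_normal(x.shape)
            # Metropolis-Hastings correction (log acceptance ratio, up to
            # constants that cancel between numerator and denominator).
            log_q = lambda b, a: -np.sum((b - prox(a)) ** 2) / (2.0 * delta)
            log_alpha = (g(x) - g(y)) + log_q(x, y) - log_q(y, x)
            return y if np.log(rng.uniform()) < log_alpha else x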

    Uncertainty quantification for radio interferometric imaging: II. MAP estimation

    Uncertainty quantification is a critical missing component in radio interferometric imaging that will only become increasingly important as the big-data era of radio interferometry emerges. Statistical sampling approaches to perform Bayesian inference, like Markov chain Monte Carlo (MCMC) sampling, can in principle recover the full posterior distribution of the image, from which uncertainties can then be quantified. However, for massive data sizes, like those anticipated from the Square Kilometre Array (SKA), it will be difficult if not impossible to apply any MCMC technique due to its inherent computational cost. We formulate Bayesian inference problems with sparsity-promoting priors (motivated by compressive sensing), for which we recover maximum a posteriori (MAP) point estimators of radio interferometric images by convex optimisation. Exploiting recent developments in the theory of probability concentration, we quantify uncertainties by post-processing the recovered MAP estimate. Three strategies to quantify uncertainties are developed: (i) highest posterior density credible regions; (ii) local credible intervals (cf. error bars) for individual pixels and superpixels; and (iii) hypothesis testing of image structure. These forms of uncertainty quantification provide rich information for analysing radio interferometric observations in a statistically robust manner. Our MAP-based methods are approximately 10^5 times faster computationally than state-of-the-art MCMC methods and, in addition, support highly distributed and parallelised algorithmic structures. For the first time, our MAP-based techniques provide a means of quantifying uncertainties for radio interferometric imaging for realistic data volumes and practical use, and scale to the emerging big-data era of radio astronomy. (13 pages, 10 figures; see the companion article in this arXiv listing.)
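
    One plausible sketch of strategy (ii), assuming the approximate HPD threshold gamma has already been computed from the MAP estimate, and bisecting the largest uniform perturbation of a superpixel that keeps the image inside the region; the initial bracket and all names are assumptions, not the paper's exact procedure:

        import numpy as np

        def local_credible_interval(objective, x_map, region, gamma, tol=1e-3):
            """Local credible interval (error bar) for one superpixel: bisect
            the largest constant shift of the masked region that keeps the
            image inside the approximate HPD set {x : objective(x) <= gamma}.
            `objective` is the negative log posterior; `region` is a boolean mask."""
            bounds = []
            for sign in (+1.0, -1.0):
                lo, hi = 0.0, np.abs(x_map[region]).max() + 1.0  # assumed bracket
                while hi - lo > tol:
                    mid = 0.5 * (lo + hi)
                    x = x_map.copy()
                    x[region] = x_map[region] + sign * mid
                    if objective(x) <= gamma:
                        lo = mid  # still inside the credibility region
                    else:
                        hi = mid
                bounds.append(x_map[region].mean() + sign * lo)
            upper, lower = bounds
            return lower, upper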

    Collaborative sparse regression using spatially correlated supports - Application to hyperspectral unmixing

    This paper presents a new Bayesian collaborative sparse regression method for linear unmixing of hyperspectral images. Our contribution is twofold. First, we propose a new Bayesian model for structured sparse regression in which the supports of the sparse abundance vectors are a priori spatially correlated across pixels (i.e., materials are spatially organised rather than randomly distributed at a pixel level). This prior information is encoded in the model through a truncated multivariate Ising Markov random field, which also takes into consideration the facts that pixels cannot be empty (i.e., there is at least one material present in each pixel) and that different materials may exhibit different degrees of spatial regularity. Second, we propose an advanced Markov chain Monte Carlo algorithm to estimate the posterior probabilities that materials are present or absent in each pixel and, conditionally on the maximum marginal a posteriori configuration of the support, compute the MMSE estimates of the abundance vectors. A remarkable property of this algorithm is that it self-adjusts the values of the parameters of the Markov random field, thus relieving practitioners from setting regularisation parameters by cross-validation. The performance of the proposed methodology is finally demonstrated through a series of experiments with synthetic and real data and comparisons with other algorithms from the literature.
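
    A minimal sketch of the kind of single-site Gibbs update such an algorithm uses for the support labels, assuming a plain 2D Ising prior with a single interaction parameter beta (the paper's model is a truncated multivariate Ising field with per-material parameters that are self-adjusted during sampling):

        import numpy as np

        def gibbs_update_support(z, i, j, beta, log_lik_on, log_lik_off, rng):
            """Single-site Gibbs update for a binary support label z[i, j] under
            a 2D Ising prior favouring agreement with the four neighbours
            (periodic wrap), combined with the marginal log-likelihoods of the
            material being present/absent in that pixel. Mechanism sketch only."""
            n_rows, n_cols = z.shape
            neighbours = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            s = sum(z[a % n_rows, b % n_cols] for a, b in neighbours)
            # Conditional log-odds: Ising agreement term plus the data term.
            log_odds = beta * (2.0 * s - len(neighbours)) + log_lik_on - log_lik_off
            p_on = 1.0 / (1.0 + np.exp(-log_odds))
            z[i, j] = 1 if rng.uniform() < p_on else 0
            return z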

    Sampling from a log-concave distribution with compact support with proximal Langevin Monte Carlo

    This paper presents a detailed theoretical analysis of the Langevin Monte Carlo sampling algorithm recently introduced in Durmus et al. (Efficient Bayesian computation by proximal Markov chain Monte Carlo: when Langevin meets Moreau, 2016) when applied to log-concave probability distributions that are restricted to a convex body K. This method relies on a regularisation procedure involving the Moreau-Yosida envelope of the indicator function associated with K. Explicit convergence bounds in total variation norm and in Wasserstein distance of order 1 are established. In particular, we show that the complexity of this algorithm given a first-order oracle is polynomial in the dimension of the state space. Finally, some numerical experiments are presented to compare our method with competing MCMC approaches from the literature.
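
    A minimal sketch of the resulting scheme, assuming the Moreau-Yosida envelope of the indicator of K, dist(x, K)^2 / (2 * lam), is used inside an unadjusted Langevin iteration and that the projection onto K is available in closed form; the step size and regularisation parameter below are illustrative:

        import numpy as np

        def myula_constrained(grad_f, proj_K, x0, gamma, lam, n_iter, rng):
            """Langevin iterates targeting pi(x) prop exp(-f(x)) * 1_K(x), with
            the indicator smoothed by its Moreau-Yosida envelope, whose gradient
            is (x - proj_K(x)) / lam. gamma: step size (assumed tuned)."""
            x = x0.copy()
            samples = []
            for _ in range(n_iter):
                drift = grad_f(x) + (x - proj_K(x)) / lam
                x = x - gamma * drift + np.sqrt(2.0 * gamma) * rng.standard_normal(x.shape)
                samples.append(x.copy())
            return np.array(samples)

        # Example: standard Gaussian restricted to the unit Euclidean ball.
        rng = np.random.default_rng(0)
        proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
        chain = myula_constrained(lambda x: x, proj_ball, np.zeros(10),
                                  gamma=1e-2, lam=1e-2, n_iter=5000, rng=rng)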

    Systematic Characterization of Gas Phase Binary Pre-Nucleation Complexes Containing H2SO4 + X, [X = NH3, (CH3)NH2, (CH3)2NH, (CH3)3N, H2O, (CH3)OH, (CH3)2O, HF, CH3F, PH3, (CH3)PH2, (CH3)2PH, (CH3)3P, H2S, (CH3)SH, (CH3)2S, HCl, (CH3)Cl]. A Computational Study

    A systematic characterization of gas-phase binary prenucleation complexes between H2SO4 (SA) and other molecules present in the atmosphere (NH3, (CH3)NH2, (CH3)2NH, (CH3)3N, H2O, (CH3)OH, (CH3)2O, HF, CH3F, PH3, (CH3)PH2, (CH3)2PH, (CH3)3P, H2S, (CH3)SH, (CH3)2S, HCl, (CH3)Cl) has been carried out using the ωB97X-D/6-311++G(2d,2p) method at the DFT level of theory. A relationship is established between the energy gap separating SA's LUMO and the partner molecule's HOMO and the increasing number of methyl (-CH3) groups in SA's partner molecule. The binding energies of the bimolecular complexes are found to be related to the electron density at the hydrogen-bond critical point, the HOMO-LUMO energy gap, the nature of the hydrogen-acceptor atom, and the frequency shifts of the acid OH bonds. The results show how frontier-orbital compatibility determines the binding energy, and that the properties of the SA OH bond that remains free of interactions are affected by formation of the bimolecular adduct. Authors: Paolo Sebastianelli (CONICET; Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía y Física; Universidad Nacional de La Pampa, Facultad de Ciencias Exactas y Naturales); Pablo Marcelo Cometto (Universidad Nacional de La Pampa, Facultad de Ciencias Exactas y Naturales; Instituto de Ciencias de la Tierra y Ambientales de La Pampa, CONICET); Rodolfo Guillermo Pereyra (Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía y Física; Instituto de Física Enrique Gaviola, CONICET); Argentina.

    Devices of symbolic exclusion in the news: media criminalisation (Dispositivos de exclusión simbólica en las noticias: la criminalización mediática)

    Author: Marcelo R. Pereyra (Universidad de Buenos Aires, Facultad de Ciencias Sociales, Argentina). News discourse can be understood as a narrative of social control insofar as it naturalises the repressive actions of police and judicial agencies. However, it can also be argued that the narration of news agendas has become a device of symbolic exclusion of marginalised social sectors. In general, these sectors are criminalised both in reporting on crime and in reporting on public expressions of protest. The changes in recent years in how news is constructed make it possible to unravel the political meaning of this media criminalisation.

    Computing the Cramér-Rao bound of Markov random field parameters: Application to the Ising and the Potts models

    This report considers the problem of computing the Cramér-Rao bound for the parameters of a Markov random field. Computation of the exact bound is not feasible for most fields of interest because their likelihoods are intractable and have intractable derivatives. We show here how it is possible to formulate the computation of the bound as a statistical inference problem that can be solved approximately, but with arbitrarily high accuracy, by using a Monte Carlo method. The proposed methodology is successfully applied to the Ising and the Potts models, where it is used to assess the performance of three state-of-the-art estimators of the parameters of these Markov random fields.
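
    A minimal sketch of the Monte Carlo idea for a directly observed Ising field, using the exponential-family identity that the Fisher information of the interaction parameter beta equals the variance of the sufficient statistic (the sum of products of neighbouring spins) under the model; the Gibbs sampler, lattice size, and sample sizes are illustrative choices:

        import numpy as np

        def ising_gibbs_sweep(x, beta, rng):
            # One Gibbs sweep over an n x n Ising lattice with states {-1, +1}
            # and periodic boundary conditions.
            n = x.shape[0]
            for i in range(n):
                for j in range(n):
                    s = (x[(i - 1) % n, j] + x[(i + 1) % n, j]
                         + x[i, (j - 1) % n] + x[i, (j + 1) % n])
                    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
                    x[i, j] = 1 if rng.uniform() < p_plus else -1
            return x

        def ising_crb(beta, n=16, n_samples=1000, burn_in=200, rng=None):
            """Monte Carlo estimate of the Cramer-Rao bound for the Ising
            interaction parameter: CRB(beta) = 1 / Var[S(x)], where
            S(x) = sum over neighbouring pairs of x_i * x_j."""
            rng = rng or np.random.default_rng(0)
            x = rng.choice([-1, 1], size=(n, n))
            stats = []
            for k in range(burn_in + n_samples):
                x = ising_gibbs_sweep(x, beta, rng)
                if k >= burn_in:
                    S = (np.sum(x * np.roll(x, 1, axis=0))
                         + np.sum(x * np.roll(x, 1, axis=1)))
                    stats.append(S)
            return 1.0 / np.var(stats)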